This repository was archived by the owner on Nov 10, 2025. It is now read-only.
Fix MDXSearchTool to work with local Ollama models without OpenAI API key#471
Closed
devin-ai-integration[bot] wants to merge 2 commits into main from
Conversation
… key

- Modified _parse_config in RagTool to properly handle llm+embedder config
- Added direct OllamaEmbeddingFunction instantiation for Ollama provider
- Added comprehensive tests for local provider configurations
- Maintains backward compatibility with existing OpenAI configurations
- Added ollama dependency to support OllamaEmbeddingFunction

Fixes #3622

Co-Authored-By: João <joao@crewai.com>
Contributor
Author
🤖 Devin AI Engineer: I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:
…g setup

- Fix config parsing to respect vectordb settings when llm+embedder are present
- Add proper validation for Ollama model_name requirement
- Improve error handling with ImportError instead of generic Exception
- Ensure fallback to get_embedding_function uses correct config keys

Co-Authored-By: João <joao@crewai.com>
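The commit above mentions improving error handling with ImportError instead of a generic Exception. A minimal sketch of that pattern, assuming the embedding function comes from chromadb (the import path may vary by chromadb version, and the helper name here is hypothetical):

```python
def load_ollama_embedding_function():
    """Return chromadb's OllamaEmbeddingFunction class, or raise ImportError."""
    try:
        # Import path assumed from recent chromadb releases; may differ by version.
        from chromadb.utils.embedding_functions import OllamaEmbeddingFunction
    except ImportError as exc:
        # Raising ImportError (not a bare Exception) lets callers distinguish a
        # missing optional dependency from a genuine configuration error.
        raise ImportError(
            "Local Ollama embeddings need an optional dependency; "
            "try: pip install chromadb ollama"
        ) from exc
    return OllamaEmbeddingFunction
```

Callers can then catch ImportError specifically and surface an actionable install hint, rather than swallowing unrelated failures under a generic handler.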
Closing due to inactivity for more than 7 days.
Fix MDXSearchTool to work with local Ollama models without OpenAI API key
Summary
Fixes issue #3622 where MDXSearchTool required an OpenAI API key even when configured to use local Ollama models for both LLM and embedder. The tool now works entirely offline with local models.
Key Changes:
- Modified _parse_config in RagTool to properly handle configurations with both llm and embedder specified as non-OpenAI providers
- Added direct OllamaEmbeddingFunction instantiation for Ollama embeddings to bypass OpenAI validation
- Added ollama>=0.6.0 dependency to support local embedding functions

Root Cause: The config parsing logic wasn't properly extracting the embedding configuration when both LLM and embedder were configured as local providers, causing a fallback to OpenAI validation.
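The root cause described above can be sketched as follows. This is a hypothetical standalone function illustrating the fix, not the actual _parse_config in crewai_tools/tools/rag/rag_tool.py, which may differ in detail:

```python
def parse_embedder_config(config: dict) -> dict:
    """Extract embedder settings, honoring non-OpenAI providers.

    Before the fix, configs naming a local provider for both 'llm' and
    'embedder' could still fall through to OpenAI validation; here the
    embedder's declared provider is always respected.
    """
    embedder = config.get("embedder", {})
    provider = embedder.get("provider", "openai")  # OpenAI only as the default
    settings = dict(embedder.get("config", {}))

    if provider == "ollama" and "model" not in settings:
        # Mirrors the PR's added validation: Ollama requires a model name.
        raise ValueError("Ollama embedder config requires a 'model' key")

    return {"provider": provider, **settings}
```

With this shape, a config such as {"embedder": {"provider": "ollama", "config": {"model": "nomic-embed-text"}}} yields an Ollama result and never touches OpenAI key validation.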
Review & Testing Checklist for Human (3 items - Yellow Risk)
- Config key mappings (model → EMBEDDINGS_OLLAMA_MODEL_NAME, url → EMBEDDINGS_OLLAMA_URL) match what the CrewAI RAG system actually expects

Test Plan Recommendation
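The checklist item above asks a reviewer to verify the config-key-to-environment-variable mapping. A minimal sketch of that mapping, assuming the env var names shown in the checklist (whether the CrewAI RAG layer actually reads these names is exactly what the reviewer is asked to confirm):

```python
# Mapping taken from the review checklist; hedged, not confirmed against
# the CrewAI RAG implementation.
KEY_TO_ENV = {
    "model": "EMBEDDINGS_OLLAMA_MODEL_NAME",
    "url": "EMBEDDINGS_OLLAMA_URL",
}

def embedder_config_to_env(config: dict) -> dict:
    """Translate embedder config keys to the expected env var names."""
    return {KEY_TO_ENV[k]: str(v) for k, v in config.items() if k in KEY_TO_ENV}
```

A reviewer could run this against a sample config and diff the result against the variables the RAG system reads at startup.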
Notes
Enable MDXSearchTool to run fully offline with local Ollama LLM/embeddings by updating config parsing and adding tests, plus the ollama dependency.

- Updated crewai_tools/tools/rag/rag_tool.py config parsing to support non-OpenAI llm and embedder providers.
- Instantiate OllamaEmbeddingFunction directly for local embeddings; preserve the existing OpenAI flow.
- Added tests/tools/test_mdx_search_tool_local_config.py covering local Ollama configurations for LLM and embeddings.
- Added ollama>=0.6.0 in pyproject.toml.

Written by Cursor Bugbot for commit 36a2f46. This will update automatically on new commits.
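An end-user configuration implied by this summary might look like the following. The dict shape follows the provider/config pattern used by crewai-tools RAG tools; the model names and URL here are illustrative placeholders, not values taken from the PR:

```python
# Hypothetical local-only config: both LLM and embedder point at Ollama,
# so no OpenAI API key should be required anywhere in the pipeline.
local_config = {
    "llm": {
        "provider": "ollama",
        "config": {"model": "llama3"},  # placeholder model name
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",            # placeholder model name
            "url": "http://localhost:11434",        # default local Ollama endpoint
        },
    },
}

# Usage sketch (requires crewai-tools and a running Ollama server):
# from crewai_tools import MDXSearchTool
# tool = MDXSearchTool(mdx="docs/page.mdx", config=local_config)
```

The key point of the fix is that a config like this no longer trips OpenAI key validation, since both providers are declared local.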
ollamadependency.crewai_tools/tools/rag/rag_tool.pyconfig parsing to support non-OpenAIllmandembedderproviders.OllamaEmbeddingFunctiondirectly for local embeddings; preserve existing OpenAI flow.tests/tools/test_mdx_search_tool_local_config.pycovering local Ollama configurations for LLM and embeddings.ollama>=0.6.0inpyproject.toml.Written by Cursor Bugbot for commit 36a2f46. This will update automatically on new commits. Configure here.